Toxicology and Applied Pharmacology
Elsevier BV
Preprints posted in the last 7 days, ranked by how well they match Toxicology and Applied Pharmacology's content profile, based on 13 papers previously published in the journal. The average preprint scores a 0.01% match for this journal, so anything above that is already an above-average fit.
Yousafzai, O.; Kanwal, K.; Annie, F. H.; Rinehart, S.
Background: Despite widespread adoption of contemporary guideline-directed medical therapy (GDMT), patients with heart failure with reduced ejection fraction (HFrEF) continue to experience substantial residual morbidity and mortality. Glucagon-like peptide-1 receptor agonists (GLP-1RAs) have demonstrated cardiometabolic benefits in diabetes and obesity, but their role in HFrEF remains uncertain. Objectives: To evaluate whether the addition of GLP-1RAs to optimized GDMT is associated with improved clinical outcomes in patients with HFrEF (NYHA class II-IV). Methods: We conducted a retrospective, multicenter cohort study using the TriNetX Research Network. Adults (≥18 years) with HFrEF (LVEF ≤40%) receiving GDMT between January 2020 and October 2024 were included. Patients treated with GLP-1RAs were compared with those on GDMT alone. After 1:1 propensity score matching, 1,518 patients were included in each cohort. Outcomes over 2 years included all-cause mortality, major adverse cardiovascular events (MACE), critical care utilization, and acute kidney failure. Time-to-event analyses were performed using Kaplan-Meier methods and Cox proportional hazards models. Results: In the matched cohort (mean age ~63 years, ~33% female), GLP-1RA use was associated with significantly lower all-cause mortality compared with GDMT alone (12.8% vs 23.8%; hazard ratio [HR] 0.48; 95% CI 0.40-0.57; p<0.001), corresponding to an absolute risk reduction of 11.0%. MACE was also reduced (35.8% vs 47.4%; HR 0.64; 95% CI 0.58-0.72; p<0.001). Additionally, GLP-1RA therapy was associated with lower critical care utilization (18.4% vs 28.9%; HR 0.55; 95% CI 0.47-0.64; p<0.001) and reduced acute kidney failure (29.2% vs 37.3%; HR 0.67; 95% CI 0.59-0.76; p<0.001). Rates of pancreatitis and substance-related disorders were low and not significantly different between groups. Conclusions: Among patients with HFrEF receiving contemporary GDMT, adjunctive GLP-1RA therapy was associated with significant reductions in mortality, cardiovascular events, and healthcare utilization. These findings support the potential role of GLP-1RAs as a novel, mechanism-complementary therapy in HFrEF. Prospective randomized trials are needed to confirm these observations and determine whether GLP-1RAs should be incorporated as a fifth pillar of GDMT.
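The design above (a logistic propensity model, 1:1 nearest-neighbour matching, then a Cox model on the matched cohort) maps onto standard Python survival tooling. A minimal sketch: the column names ("treated", "time_to_event", "event") and matching details are hypothetical assumptions, an illustration of the general technique rather than the authors' TriNetX pipeline.

```python
# Sketch of 1:1 propensity-score matching followed by a Cox model.
# Column names are hypothetical; this nearest-neighbour step matches
# with replacement and applies no caliper, unlike many clinical pipelines.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

def match_and_fit(df: pd.DataFrame, covariates: list[str]) -> CoxPHFitter:
    # Propensity score: P(treatment | covariates).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(pscore=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 1:1 nearest-neighbour match on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]])

    # Cox proportional hazards model on the matched cohort.
    cph = CoxPHFitter()
    cph.fit(matched[["time_to_event", "event", "treated"]],
            duration_col="time_to_event", event_col="event")
    return cph  # cph.hazard_ratios_["treated"] is the treatment HR
```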
Fahed, G.; Cauwenberghs, N.; Santana, E. J.; Chen, R.; Celestin, B. E.; Gomes Botelho Quintas, B. F.; Short, S.; Carroll, M.; Miyoshi, T.; Alexander, K. M.; Shah, S. H.; Orr, S. S.; Kovacs, A.; Daubert, M. A.; Kuznetsova, T.; Addetia, K.; Asch, F. M.; Mahaffey, K. W.; Douglas, P. S.; Haddad, F.
Background: Among cardiac measures, diastolic parameters demonstrate the earliest and most consistent age-related changes. This can be leveraged to develop a continuous left ventricular (LV) Diastolic Age from routine echocardiographic parameters. Analogous to how epigenetic clocks weight molecular markers against mortality risk, we calibrated Diastolic Age by weighting echocardiographic features against the validated PREVENT-Heart Failure (HF) risk score. Methods: We analyzed 1,952 participants from the Project Baseline Health Study (median age 50 [36-64] years, 54% female). The measure was derived using partial least-squares regression anchored on PREVENT-HF and calibrated within a healthy reference subgroup. External validation was performed in the WASE (n=1,708) and Stanford Cardiovascular Aging (n=313) cohorts. Associations with ASE-defined LV diastolic dysfunction (LVDD), epigenetic clocks, and major adverse cardiovascular events (MACE) were examined. Results: Diastolic Age correlated strongly with chronological age (r=0.78) with robust external validation (WASE r=0.76; Stanford r=0.82; calibration slopes ≈1.0). It increased progressively across grades of diastolic dysfunction and discriminated LVDD with an AUC of 0.89 (95% CI 0.87-0.92), and was independently associated with hypertension, diabetes, and elevated C-reactive protein. While correlated with the Levine (r=0.76) and Horvath (r=0.41) epigenetic clocks, residual analyses indicated that Diastolic Age captures a distinct cardiac-specific dimension of biological aging. Over median follow-up of 4.2 years, it independently predicted MACE (HR 2.30, 95% CI 1.70-3.18), with accelerated diastolic aging across all age groups among those with events. Discrimination was comparable to ASE-defined LVDD (C-index 0.83 vs. 0.82). Conclusion: Diastolic Age provides a continuous, echocardiography-derived measure of cardiac biological aging that complements categorical diastolic grading and epigenetic aging clocks, and independently predicts cardiovascular outcomes.
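The derivation step (partial least-squares regression anchored on a risk score, then calibration in a healthy reference subgroup) has a direct scikit-learn analogue. A minimal sketch under assumed inputs; the calibration rule below is one plausible reading of the abstract, not the authors' exact procedure.

```python
# Sketch: derive a continuous "Diastolic Age" by PLS regression of echo
# features against an anchor risk score, then linearly rescale so that,
# within a healthy reference subgroup, it matches the mean/SD of
# chronological age. Variable names and the rescaling rule are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def diastolic_age(X, anchor_score, healthy_mask, chrono_age, n_components=3):
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, anchor_score)
    raw = pls.predict(X).ravel()

    h = healthy_mask
    slope = chrono_age[h].std() / raw[h].std()
    return chrono_age[h].mean() + slope * (raw - raw[h].mean())
```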
Brophy, J. M.
Objective: To explore the interpretation of unexpected results from a randomized controlled trial (RCT). Study Design and Setting: Adjunctive frequentist (power and type-M error) and Bayesian analyses were performed on a recently published RCT reporting a statistically significant relative risk reduction (p < 0.01) for caffeinated coffee drinkers compared with abstinence on atrial fibrillation (AF) recurrence. Individual patient data for the Bayesian survival models were reconstructed from the RCT published material, and priors were informed by the RCT power calculations. Results: The original RCT design had limited power for realistic effect sizes, increasing susceptibility to type-M (magnitude) error. Bayesian analyses also tempered the benefit for caffeinated coffee implied by the standard statistical analysis, yielding only modest probabilities of clinically meaningful risk reductions (e.g., 88% for a hazard ratio < 0.9, and 82% for a risk difference > 2%). Conclusions: Supplemental frequentist and Bayesian approaches can provide robustness checks for unexpected RCT findings, offering contextualization, clarifying distinctions between statistical and clinical significance, and guiding replication needs. Highlights:
- Randomized controlled trial (RCT) results may be unexpected and challenge prior beliefs
- Supplemental frequentist and Bayesian analyses can clarify interpretation of surprising findings
- Power and type-M error assessments help evaluate design adequacy for realistic effects
- Bayesian posterior probabilities provide additional nuanced insight into contextualization and clinical significance
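The posterior probabilities quoted in the results are simply tail areas of the posterior distribution for the hazard ratio. A minimal sketch on the log-HR scale, using an illustrative normal posterior; the mean and SD below are placeholders, not values reconstructed from the trial.

```python
# Posterior probability of a clinically meaningful effect, computed as a
# tail area of a normal posterior on log(HR). The posterior mean/SD are
# illustrative placeholders, not the paper's values.
import numpy as np
from scipy.stats import norm

post_mean, post_sd = np.log(0.70), 0.18   # hypothetical posterior for log(HR)

p_any_benefit = norm.cdf(np.log(1.0), post_mean, post_sd)   # P(HR < 1)
p_meaningful  = norm.cdf(np.log(0.9), post_mean, post_sd)   # P(HR < 0.9)
print(f"P(HR<1) = {p_any_benefit:.2f}, P(HR<0.9) = {p_meaningful:.2f}")
```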
Goldman, A.; Nguyen, M.; Lanoix, J.; Li, C.; Fahmy, A.; Zhong Xu, Y.; Schurr, E.; Thibault, P.; Desjardins, M.; McBride, H.
Altered iron homeostasis has long been implicated in Parkinson's Disease (PD), although the mechanisms have not been clear. Given the critical role of PD-related activating mutations in LRRK2 (leucine-rich repeat protein kinase 2) within membrane trafficking pathways, we examined the impact of a homozygous LRRK2 G2019S mutation on iron homeostasis within the RAW macrophage cell line, which has high iron capacity. Proteomics analysis revealed a dysregulation of iron-related proteins at steady state, with highly elevated levels of ferritin light chain and a reduction of ferritin heavy chain. LRRK2 G2019S mutant cells showed efficient ferritinophagy upon iron chelation, but upon iron overload there was a near-complete block in the degradation of the ferritinophagy adaptor NCOA4. These conditions led to an accumulation of phosphorylated Rab8 at the plasma membrane, which was selectively inhibited by LRRK2 type II kinase inhibitors. Iron overload then led to increased oxidative stress and ferroptotic cell death. These data implicate LRRK2 as a key regulator of iron homeostasis and point to the need for an increased focus on the mechanisms of iron dysregulation in PD.
Joachimbauer, A.; Perez-Shibayama, C. I.; Payne, E.; Hanka, I.; Stadler, R.; Papadopoulou, I.; Rickli, H.; Maeder, M. T.; Borst, O.; Zdanyte, M.; Cooper, L.; Flatz, L.; Matter, C. M.; Wilzeck, V. C.; Manka, R.; Saguner, A. M.; Ruschitzka, F.; Schmidt, D.; Ludewig, B.; Gil-Cruz, C. D. C.
Background and Aims: Acute myocarditis (AM) is a T cell-mediated myocardial disease with clinical manifestations ranging from mild chest pain to cardiogenic shock. Reliable biomarkers to stratify patients and guide therapy are currently lacking. In particular, the extent of the dysregulation of inflammatory pathways, and the impact on myocardial dysfunction, remain elusive. Methods: Serum analyses were performed in prospectively recruited AM patients (n = 103) from two independent cohorts. Multimodal data integration combining profiling of cytokine and chemokine dysregulation with clinical biomarkers was used to define clinical phenotypes with distinct inflammatory signatures. Machine-learning and regression models were applied to determine biomarkers that indicate clinical severity. Results: Immuno-proteomic profiling revealed conserved inflammatory patterns across AM cohorts, dominated by T cell-related cytokines and chemokines. In addition, AM patients showed dysregulation of fibroblast-derived cytokines, including hepatocyte growth factor (HGF), bone morphogenetic protein 4 (BMP4) and the BMP4 inhibitors Gremlin-1 (GREM1) and Gremlin-2 (GREM2). Data integration and unsupervised clustering revealed two immuno-clinical phenotypes, linking T cell activation and fibroblast dysregulation to disease severity. Machine learning-based analysis identified CXCL10, GREM2 and LVEF as critical parameters for stratifying disease severity. Conclusions: These findings highlight a systemic T cell activation signature as a diagnostic hallmark of AM. In addition, dysregulation of fibroblast-derived tissue cytokines serves as an indicator for distinct immuno-clinical phenotypes in myocardial inflammatory disease. Thus, the clinically relevant link between T cell-driven immune activation, myocardial inflammation and fibroblast-driven remodelling provides a versatile set of parameters to identify severe manifestations of AM.
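The marker-stratification step can be illustrated with a penalized regression: rank standardized features by coefficient magnitude in an L1-regularized logistic model. Everything below (the feature set, the simulated data, the model choice) is a hypothetical stand-in for the authors' machine-learning analysis.

```python
# Sketch of "identify markers that stratify severity": fit an L1 logistic
# model on standardized features and rank by |coefficient|. Data are
# simulated so that CXCL10, GREM2 and LVEF carry signal, mimicking the
# abstract's result; nothing here reproduces the actual analysis.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
features = ["CXCL10", "GREM2", "HGF", "BMP4", "GREM1", "LVEF"]
X = pd.DataFrame(rng.normal(size=(103, len(features))), columns=features)
severe = (0.9 * X["CXCL10"] + 0.7 * X["GREM2"] - 0.8 * X["LVEF"]
          + rng.normal(0, 1, 103)) > 0

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(StandardScaler().fit_transform(X), severe)
ranking = pd.Series(np.abs(clf.coef_[0]), index=features).sort_values(ascending=False)
print(ranking)   # CXCL10, GREM2, LVEF should rank highest here
```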
Hiatt, L.; Peterson, E. V.; Happ, H. C.; Major-Mincer, J.; Avvaru, A.; Goclowski, C. L.; Garretson, A.; Sasani, T. A.; Hotaling, J. M.; Neklason, D. W.; Uchida, A. M.; Quinlan, A. R.
Colorectal cancer (CRC) is the second leading cause of cancer death globally and the number one cause of cancer death in people under 50 years old. The reasons for the rise of early-onset CRC are unknown, and while anatomically distinct subtypes of CRC have substantial clinical and molecular associations, the etiology of region-specific disease, such as early-onset CRC's enrichment in the distal colon, remains unclear. Understanding regional mutagenesis may identify risk factors for this public health concern and CRC more broadly. To evaluate mutational dynamics across the premalignant colon, we performed whole-genome sequencing of 125 individual colon crypts taken from six standardized regions biopsied during colonoscopy, collected from 11 donors without polyps and 10 with polyps. We observed mutation spectra and accumulation rates consistent with previous whole-organ studies, with greater subclonal mutation capture enabled by experimental design. T>[A,C,G] mutations, which are associated with colibactin genotoxicity from pks+ Escherichia coli, were significantly enriched in the rectum of donors with and without polyps (adjusted p-values < 0.01). Moreover, when comparing findings to crypts from individuals with CRC and sequenced CRC tumors, we observed consistent enrichment of the colibactin-associated mutational signature "ID18" in the rectum in both normal colon crypts and CRC tumors, without significant difference in colibactin-specific single nucleotide variant or insertion-deletion burden in crypts across the three clinical groups (i.e., no polyp, polyp, and CRC). These findings argue against a causal or prognostic role for colibactin in CRC, instead indicating that the proposed association with early-onset disease reflects anatomic specificity rather than cancer-specific clinical relevance.
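Testing regional enrichment of a mutation class reduces to a contingency-table comparison. A minimal sketch with hypothetical counts, using Fisher's exact test as a stand-in for the paper's actual statistics and multiple-testing correction.

```python
# Is the colibactin-associated class (e.g. T>N substitutions) enriched in
# rectal crypts relative to other colon regions? Counts are hypothetical;
# the paper's actual test and multiple-testing correction may differ.
from scipy.stats import fisher_exact

#                colibactin-class   other mutations
rectum        = [       420,             5_100     ]
other_regions = [      1_350,           24_800     ]

odds_ratio, p_value = fisher_exact([rectum, other_regions],
                                   alternative="greater")
print(f"OR = {odds_ratio:.2f}, one-sided p = {p_value:.3g}")
```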
Liu, Y.; Foguet, C.; Ben-Eghan, C.; Persyn, E.; Richards, M.; Wu, Z.; Lambert, S. A.; Butterworth, A. S.; Wood, A.; Di Angelantonio, E.; Inouye, M.; Ritchie, S. C.
Background and Aims: Despite treatment, patients with established atherosclerotic cardiovascular disease (ASCVD) are at high risk of recurrent events. Existing clinical risk scores for recurrence provide only moderate predictive performance and rely largely on the same conventional risk factors used to predict disease onset. Proteomics is a promising source of new biomarkers, but the technologies need focused use cases in order to achieve utility and implementation. We aimed to determine whether plasma proteomics improves prediction of recurrent cardiovascular events beyond established clinical risk models in secondary prevention in a population-scale cohort. Methods: Plasma proteomic profiles from ~9,300 participants in the UK Biobank with established ASCVD at baseline were analysed using machine learning methods to derive and evaluate proteomic predictors of recurrent cardiovascular events. The top-performing model comprised proteins with non-zero weights (full protein score). Predictive performance of the proteomic predictors, an established clinical risk score (SMART2), and their combination was evaluated across six pre-defined testing datasets representing multiple ethnic and geographic groups. A parsimonious set of proteins with existing clinical-grade enzyme-linked immunosorbent assays (ELISAs) available was then derived. Results: The full protein score achieved higher performance for recurrent ASCVD than the SMART2 risk score across all ethnic and geographic subgroups (mean C-index 0.743 vs 0.653). Adding the full protein score to SMART2 improved discrimination, with the largest increase in White Irish participants (ΔC-index 0.140; 95% CI, 0.074-0.205; P<0.001). However, adding SMART2 to the protein score provided minimal additional value. The parsimonious score preserved most of the discrimination of the full protein model: the C-index of a recurrent ASCVD risk model comprising age, sex and the parsimonious protein score was nearly identical to that of the full protein model in the largest testing set (0.723 vs 0.728 for White British participants in England and Wales). The parsimonious protein score showed a marked gradient of risk, with the top, middle and bottom quintiles showing 10-year recurrent ASCVD rates of ~27.4%, ~9.6% and ~2.4%, respectively. Conclusions: In patients with established ASCVD, plasma protein measurements substantially improved prediction of recurrent events beyond conventional clinical risk factors, supporting their potential as a complementary tool to guide secondary prevention of cardiovascular disease.
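Discrimination comparisons like SMART2 versus the protein score come down to Harrell's C-index. A minimal lifelines sketch on simulated data; note the sign convention (the function expects higher scores to mean longer survival, so risk scores are negated).

```python
# Comparing discrimination of two risk scores by Harrell's C-index.
# All data below are simulated; neither score is the paper's model.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
risk_a = rng.normal(size=500)                       # e.g. a clinical score
time   = rng.exponential(np.exp(-0.8 * risk_a))     # higher risk, shorter time
event  = rng.integers(0, 2, 500)                    # 1 = recurrent event
risk_b = risk_a + rng.normal(scale=0.5, size=500)   # noisier second score

# concordance_index expects higher values to mean longer survival,
# so negate the risk scores before scoring.
c_a = concordance_index(time, -risk_a, event)
c_b = concordance_index(time, -risk_b, event)
print(f"C-index A = {c_a:.3f}, B = {c_b:.3f}")
```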
Yu, J.; Tillema, S.; Akel, M.; Aron, A.; Espinosa, E.; Fisher, S. A.; Branche, T. N.; Mithal, L. B.; Hartmann, E. M.
Benzalkonium chloride (BAC) is widely used as a disinfectant in cleaning products and is frequently detected in indoor dust. In this study, we assessed dust samples, along with information on cleaning product use, from 24 pregnant participants. Dust samples were analyzed for BAC concentration and microbial tolerance. Different chain lengths of BAC (C12, C14, and C16) were quantified using LC-MS/MS, and bacterial isolates were tested for BAC tolerance using minimum inhibitory concentration (MIC) assays. BAC was ubiquitously detected, with C12 and C14 being dominant. Higher BAC concentrations were associated with reported disinfectant use and increased microbial tolerance. These findings suggest that indoor antimicrobial use may promote microbial resistance, highlighting potential exposure risks in indoor environments and the need for further investigation into health and ecological impacts.
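An MIC readout from a broth-microdilution series is a simple threshold scan across the dilution gradient. A minimal sketch with hypothetical concentrations and optical densities; the growth cutoff is an assumption, not the study's protocol.

```python
# Minimum inhibitory concentration (MIC): the lowest disinfectant
# concentration at which growth (optical density) stays below a cutoff.
# Concentrations, OD values, and the cutoff are hypothetical.
import numpy as np

bac_ug_ml = np.array([1, 2, 4, 8, 16, 32, 64])   # two-fold dilution series
od600     = np.array([0.92, 0.88, 0.71, 0.35, 0.04, 0.03, 0.02])
cutoff    = 0.05                                 # "no visible growth"

inhibited = od600 < cutoff
mic = bac_ug_ml[inhibited][0] if inhibited.any() else None
print(f"MIC = {mic} ug/mL")   # 16 ug/mL for these illustrative values
```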
Atzenhoefer, M.; Nelson, B.; Atzenhoefer, T. E.; Staudacher, M.; Boxwala, H.; Iqbal, F. M.
Aims: Responses to remote pulmonary artery pressure data vary across programs. We evaluated SMART-HF, a structured pulmonary artery diastolic pressure (PAD)-guided workflow, in a community heart failure cohort. Methods: We retrospectively analysed adults with heart failure and an implanted pulmonary artery pressure sensor managed with SMART-HF. PAD was calculated from prespecified 14-day windows at baseline, 90 days, and 6 months. Two hemodynamic management performance indices (HMPI) were prespecified: the 6-Month Delta HMPI (PAD reduction >2 mmHg from baseline) and the 90-Day Target HMPI (PAD ≤20 mmHg at 90 days). Exploratory analyses evaluated patients with baseline PAD >20 mmHg. Results: Of 37 patients, 36 had paired 90-day and 29 had paired 6-month windows. Mean PAD decreased from 18.3 ± 7.0 to 16.1 ± 6.3 mmHg at 90 days and from 18.8 ± 6.8 to 15.5 ± 5.8 mmHg at 6 months (both P < 0.001). The 90-Day Target HMPI was achieved in 26/36 (72.2%) and the 6-Month Delta HMPI in 19/29 (65.5%; 95% CI 45.7-82.1). In the exploratory subgroup (baseline PAD >20 mmHg), mean PAD changes were -2.9 ± 3.6 mmHg at 90 days (n = 19; P = 0.002) and -4.9 ± 4.9 mmHg at 6 months (n = 15; P = 0.002). Conclusions: SMART-HF was associated with improved ambulatory PAD control at 90 days and 6 months. Exploratory subgroup findings support further evaluation in patients with elevated baseline PAD.
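Both prespecified indices are simple functions of windowed PAD means. A minimal pandas sketch assuming one row per patient-day; the exact day offsets for the 90-day and 6-month windows are assumptions, not the protocol's definitions.

```python
# Computing the two hemodynamic management performance indices (HMPI)
# from daily pulmonary artery diastolic pressure (PAD) readings.
# DataFrame layout and window offsets are hypothetical.
import pandas as pd

def hmpi_flags(df: pd.DataFrame) -> pd.DataFrame:
    # df columns: patient_id, day (int, 0 = baseline start), pad (mmHg)
    def window_mean(lo, hi):
        w = df[(df["day"] >= lo) & (df["day"] < hi)]
        return w.groupby("patient_id")["pad"].mean()

    baseline = window_mean(0, 14)      # prespecified 14-day windows
    day90    = window_mean(83, 97)     # centred on day 90 (assumption)
    month6   = window_mean(173, 187)   # centred on 6 months (assumption)

    return pd.DataFrame({
        "target_90d": day90 <= 20,              # PAD <= 20 mmHg at 90 days
        "delta_6mo":  (baseline - month6) > 2,  # >2 mmHg fall from baseline
    })
```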
Villar-Valero, J.; Nebot, L.; Soto-Iglesias, D.; Falasconi, G.; Berruezo, A.; Boukens, B. J. D.; Trenor, B.; Gomez, J. F.
Background: Sympathetic modulation via the stellate ganglia is increasingly recognized as a contributor to ventricular arrhythmogenesis after myocardial infarction. However, the mechanisms by which autonomic remodeling interacts with chronic infarct substrates to shape arrhythmic vulnerability remain incompletely understood. Objectives: To test the hypothesis that left- and right-sided stellate ganglion-mediated sympathetic nervous system (SNS) modulation differentially reshapes ventricular arrhythmic vulnerability in chronic post-infarct substrates, and that the re-entry vulnerability index (RVI) detects changes in vulnerability beyond conventional stimulation-based inducibility. Methods: Fourteen patient-specific ventricular models with chronic post-infarct remodeling were reconstructed from imaging data. A total of 336 simulations were performed under different combinations of stellate ganglion modulation, border zone remodeling, and fibroblast density. Arrhythmic vulnerability was quantified using 3D RVI mapping during paced rhythms and compared with conventional stimulation-based inducibility outcomes. Results: Stellate ganglion modulation induced marked, regionally heterogeneous changes in repolarization timing, resulting in lower and more negative RVI values in vulnerable regions. More negative RVI values reflect increased propensity for wavefront-waveback interaction and reentry initiation. Across the cohort, stellate modulation consistently decreased the minimum RVI, even when inducibility outcomes remained unchanged. These findings indicate that SNS modulation can create a substrate more permissive to reentry independently of whether ventricular arrhythmia is triggered during programmed stimulation. Conclusions: Stellate ganglion-mediated sympathetic modulation dynamically reshapes ventricular arrhythmic vulnerability in chronic post-infarct substrates. RVI provides a spatially resolved, vulnerability-based metric that complements inducibility testing by revealing autonomic-substrate interactions underlying arrhythmogenesis. Condensed Abstract: Sympathetic modulation via the stellate ganglia can alter ventricular repolarization and promote arrhythmogenesis after myocardial infarction, yet clinical responses remain heterogeneous. Using 14 patient-specific post-infarction ventricular models, we simulated left- and right-sided stellate modulation across combinations of border zone remodeling and fibrosis (336 simulations). Stellate modulation induced regionally heterogeneous repolarization shortening and reduced RVI values, even when programmed-stimulation inducibility remained unchanged. These findings suggest that RVI captures substrate-level vulnerability beyond binary induction testing and may improve mechanistic assessment of autonomic-substrate interactions in chronic infarct substrates.
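For readers unfamiliar with the metric: one published formulation of the re-entry vulnerability index (Child et al.) scores each proximal/distal site pair as the repolarization time at the proximal site minus the activation time at the distal site, with low or negative values flagging re-entry-permissive tissue. A deliberately simplified 1D toy follows; the study computes spatially resolved 3D maps, and the site times below are invented.

```python
# Toy re-entry vulnerability index (RVI) in the spirit of Child et al.:
# for a (proximal, distal) site pair straddling a potential line of block,
# RVI = RT_proximal - AT_distal. Low or negative values mean the proximal
# tissue may already have repolarized when the late distal wavefront can
# re-invade it. All times below are hypothetical.
import numpy as np

at = np.array([10.0, 14.0, 300.0, 310.0])    # activation times (ms); distal
                                             # sites activate late via slow
                                             # conduction through border zone
rt = np.array([290.0, 288.0, 540.0, 555.0])  # repolarization times (ms)

pairs = [(0, 2), (1, 3)]                     # (proximal, distal) indices
rvi = np.array([rt[p] - at[d] for p, d in pairs])   # [-10, -22]
print(f"RVI_min = {rvi.min():.0f} ms")       # more negative = more vulnerable
```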
Quide, Y.; Lim, T. E.; Gustin, S. M.
Background: Early-life adversity (ELA) is a risk factor for enduring pain in youth and is associated with alterations in brain morphology and function. However, it remains unclear whether ELA-related neurobiological changes contribute to the development of enduring pain in early adolescence. Methods: Using data from the Adolescent Brain Cognitive Development (ABCD) Study, we examined multimodal magnetic resonance imaging (MRI) markers in children assessed at baseline (ages 9-11 years) and at 2-year follow-up (ages 11-13 years). ELA exposure was defined at baseline to maximise temporal separation between early adversity and later enduring pain. Participants with enduring pain at follow-up (n = 322) were compared to matched pain-free controls (n = 644). Structural MRI, diffusion MRI (fractional anisotropy, mean diffusivity), and resting-state functional connectivity data were analysed. Linear models tested main effects of enduring pain, ELA, and their interaction on brain metrics, controlling for relevant covariates. Results: ELA exposure was associated with smaller caudate and nucleus accumbens volumes, and reduced surface area of the left rostral middle frontal gyrus. No significant effects of enduring pain or ELA-by-enduring-pain interaction were observed across grey matter, white matter, or functional connectivity measures. Conclusions: ELA was associated with alterations in fronto-striatal regions in late childhood, but these changes were not linked to enduring pain in early adolescence. These findings suggest that ELA-related neurobiological alterations may represent early markers of vulnerability rather than concurrent correlates of enduring pain. Longitudinal follow-up is needed to determine whether these alterations contribute to later chronic pain risk.
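The main-effect and interaction tests map directly onto a formula interface. A minimal statsmodels sketch on simulated data; the column names and covariates are hypothetical, not the ABCD variable names.

```python
# Linear model testing main effects of enduring pain and ELA, and their
# interaction, on a brain metric with covariates. Data are simulated so
# only ELA carries signal, mirroring the abstract's pattern of results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 966                                  # 322 cases + 644 matched controls
df = pd.DataFrame({
    "enduring_pain": rng.integers(0, 2, n),
    "ela":           rng.integers(0, 2, n),
    "age":           rng.uniform(11, 13, n),
    "sex":           rng.integers(0, 2, n),
    "icv":           rng.normal(1.5e6, 1e5, n),   # intracranial volume
})
df["accumbens_volume"] = (600 - 15 * df["ela"] + 2e-4 * df["icv"]
                          + rng.normal(0, 40, n))

model = smf.ols("accumbens_volume ~ enduring_pain * ela + age + sex + icv",
                data=df).fit()
print(model.summary().tables[1])         # coefficients incl. interaction term
```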
Spann, D. J.; Hall, L. M.; Moussa-Tooks, A.; Sheffield, J. M.
Background: Negative symptoms are core features of schizophrenia that relate strongly to functional impairment, yet interventions targeting these symptoms remain largely ineffective. Emerging theoretical work highlights how environmental factors may shape and maintain negative symptoms. Although racial disparities in schizophrenia diagnosis among Black Americans are well documented and linked to racial stress and psychosis, the impact of racial stress on negative symptoms has not been examined. This study provides an initial test of a novel theory proposing that racial stress, here measured by racial discrimination, influences negative symptom severity through exacerbation of negative cognitions about the self, particularly defeatist performance beliefs (DPB). Study Design: Participants diagnosed with a schizophrenia-spectrum disorder (SSD) (N = 208; 80 Black, 128 White) completed the Positive and Negative Syndrome Scale (PANSS), the Defeatist Beliefs Scale, and self-report measures of subjective racial and ethnic discrimination (Racial and Ethnic Minority Scale and General Ethnic Discrimination Scale). Relationships among variables were tested using linear regression and mediation analysis. Study Results: Black participants exhibited significantly greater total and experiential negative symptoms than White participants, with no group difference in DPB. Racial discrimination explained 46% of the relationship between race and negative symptoms. Among Black participants, higher DPB were associated with greater negative symptom severity. Discrimination was positively related to both DPB and negative symptoms. DPB partially mediated the relationship between discrimination and negative symptoms. Conclusions: Findings suggest that racial stress contributes to negative symptom severity via defeatist beliefs among Black individuals, highlighting potential targets for culturally informed interventions.
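The mediation logic (discrimination to defeatist beliefs to negative symptoms) can be sketched with the product-of-coefficients method and a bootstrap confidence interval. Data below are simulated, and the paper's covariates and exact estimator are omitted.

```python
# Product-of-coefficients mediation with a percentile bootstrap CI:
# indirect effect = a * b, where a is discrimination -> DPB and b is
# DPB -> negative symptoms (adjusting for discrimination). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 208
d = pd.DataFrame({"discrim": rng.normal(size=n)})
d["dpb"] = 0.4 * d["discrim"] + rng.normal(size=n)
d["negsym"] = 0.3 * d["dpb"] + 0.2 * d["discrim"] + rng.normal(size=n)

def indirect(df):
    a = smf.ols("dpb ~ discrim", df).fit().params["discrim"]
    b = smf.ols("negsym ~ dpb + discrim", df).fit().params["dpb"]
    return a * b

boot = [indirect(d.sample(n, replace=True, random_state=i)) for i in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(d):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```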
Xu, J.; Parker, R. M. A.; Bowman, K.; Clayton, G. L.; Lawlor, D. A.
Background: Higher levels of sedentary behaviour, such as leisure screen time (LST), and lower levels of physical activity are associated with diseases across multiple body systems, which contribute to a large global health burden. Whether these associations are causal is unclear. The primary aim of this study was to investigate the causal effects of higher LST (given greater power) and, secondarily, lower moderate-to-vigorous intensity physical activity (MVPA), on a wide range of diseases in a hypothesis-free approach. Methods: A two-sample Mendelian randomisation phenome-wide association study was conducted for the main analyses. Single nucleotide polymorphisms (SNPs) were first selected as genetic instruments for the exposures LST (hours of television watched per day; 117 SNPs) and MVPA (higher vs. lower; 18 SNPs) at the genome-wide significance threshold (p < 5×10⁻⁸) from the largest relevant genome-wide association study (GWAS). For disease outcomes, we used summary results from FinnGen GWAS, covering 1,719 diseases defined by hospital discharge International Classification of Diseases (ICD) codes in 453,733 European participants. For the main analyses, we used the inverse-variance weighting method with a Bonferroni-corrected p-value threshold of p ≤ 3.47×10⁻⁴. Sensitivity analyses included Steiger filtering, MR-Egger, and weighted median analyses, and data from UK Biobank were used to explore replication. Findings: Genetically predicted higher LST was associated with increased risk of 87 (5.1% of the 1,719) diseases. Most of these diseases were in the musculoskeletal and connective tissue (n=37), genitourinary (n=12) and respiratory (n=8) systems. Genetic liability to lower MVPA was associated with six diseases: three in the musculoskeletal and connective tissue and genitourinary systems (with greater risk of these diseases also identified for higher LST), and three in the respiratory and genitourinary systems. Sensitivity analyses largely supported the main analyses. Results replicated in UK Biobank, where data were available. Conclusions: Higher levels of sedentary behaviour, and lower levels of physical activity, causally increase the risk of diseases across multiple body systems, making them promising targets for reducing multimorbidity.
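The inverse-variance weighted (IVW) estimator used in the main analyses is a weighted regression of SNP-outcome effects on SNP-exposure effects through the origin. A minimal numpy sketch with simulated summary statistics (fixed-effect weights; the paper's sensitivity estimators are not shown).

```python
# Two-sample Mendelian randomisation, inverse-variance weighted (IVW):
# regress SNP-outcome effects on SNP-exposure effects through the origin,
# weighting by 1/se_outcome^2. Summary statistics below are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_snps = 117                                   # e.g. the LST instrument
beta_exp = rng.normal(0.03, 0.01, n_snps)      # SNP -> exposure effects
se_out   = rng.uniform(0.01, 0.03, n_snps)     # outcome standard errors
beta_out = 0.5 * beta_exp + rng.normal(0, se_out)   # true causal effect 0.5

w = 1.0 / se_out**2
beta_ivw = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp**2)
se_ivw   = np.sqrt(1.0 / np.sum(w * beta_exp**2))
print(f"IVW estimate: {beta_ivw:.3f} (SE {se_ivw:.3f})")
```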
Harikumar, A.; Baker, B.; Amen, D.; Keator, D.; Calhoun, V. D.
Single photon emission computed tomography (SPECT) is a highly specialized imaging modality that enables measurement of regional cerebral perfusion and, in particular, resting cerebral blood flow (rCBF). Recent technological advances have improved SPECT quantification and reliability, making it increasingly useful for studying rCBF abnormalities and perfusion network alterations in psychiatric and neurological disorders. To characterize large-scale functional organization in SPECT data, data-driven decomposition methods such as independent component analysis (ICA) have been used to extract covarying perfusion patterns that map onto interpretable brain networks. Blind ICA provides a data-driven approach to estimate these networks without strong prior assumptions. More recently, a hybrid approach that leverages spatial priors to guide a spatially constrained ICA (sc-ICA) has been used to fully automate the ICA analysis while also providing participant-specific network estimates. While this has been reliably demonstrated in fMRI with the NeuroMark template, there is currently no comparable SPECT template. A SPECT template would enable automatic estimation of functional SPECT networks with participant-specific expressions that correspond across participants and studies. The current study introduces a new replicable NeuroMark SPECT template for estimating canonical perfusion covariance patterns (networks). We first identify replicable SPECT networks using blind ICA applied to two large-sample SPECT datasets. We then demonstrate the use of the resulting template by applying sc-ICA to an independent schizophrenia dataset. In sum, this work presents and shares the first NeuroMark SPECT template and demonstrates its utility in an independent cohort, providing a scalable and robust framework for network-based analyses.
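The blind decomposition step has a standard scikit-learn analogue, though sklearn offers no spatially constrained variant. A minimal FastICA sketch on a simulated subjects-by-voxels matrix, illustrating only the blind step of the pipeline described above.

```python
# Blind spatial ICA on perfusion data: decompose a subjects x voxels matrix
# into spatially independent components (candidate networks) plus
# per-subject loadings. Data are simulated noise; this shows the mechanics,
# not the NeuroMark template's spatially constrained (sc-ICA) step.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5000))        # 200 subjects x 5000 voxels (toy)

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
loadings = ica.fit_transform(X)         # (subjects, components) expressions
networks = ica.components_              # (components, voxels) spatial maps
print(loadings.shape, networks.shape)   # (200, 20) (20, 5000)
```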
Hudu, S.; Uthman, K.; Katuala, Y.; Bello, I. W.; Mbuyi, Y.; Worku, D. T.; Mbelani, S. C.; Adjaho, I. I.; Gignoux, E.; Doumbia, C. O.; Ale, F.; Polonsky, J.
Background: Nigeria has experienced its largest recorded diphtheria outbreak since late 2022, centred on Kano State, where facility-based surveillance documented over 25,000 confirmed cases. The true community burden remains unknown. We conducted a population-based household survey to estimate community attack rates, mortality, vaccination coverage, and determinants of infection and death. Methods: We performed a retrospective household survey (September-October 2024) using spatially randomised cluster sampling (65 clusters, ~15 households each; recall period January 2023 to interview). Survey-weighted analyses, multivariable logistic regression, and sensitivity analyses were used. Findings: We enrolled 7,998 individuals from 1,068 households. The community attack rate was 1.1% (95% CI 0.7-1.4), 4.2 times (2.7-5.3) higher than facility-based estimates. The case fatality ratio was 8.8% (1.9-15.6) overall and 21.3% among children under five; two-thirds of deaths occurred at home. Delayed care-seeking of four or more days was associated with markedly higher mortality (risk ratio 32.6, 95% CI 2.4-450.0). Vaccination was strongly protective against death (vaccine effectiveness 57%, 95% CI 34-72%; E-value 4.07). Among campaign-eligible children, routine EPI coverage was 56.6%; the reactive campaign reached few previously unvaccinated children (99.7% overlap with prior recipients), leaving 11.6% of eligible children unvaccinated. Interpretation: Community diphtheria burden substantially exceeded facility surveillance estimates, with most deaths occurring outside the health system. Delayed care-seeking and low vaccination coverage were the main drivers of mortality, highlighting the need for improved community surveillance, decentralised care, and better-targeted vaccination.
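The vaccine-effectiveness figure is one minus a risk ratio. A minimal sketch computing VE with a Wald confidence interval on the log scale from hypothetical 2x2 counts; the survey's weighted, adjusted estimate would differ.

```python
# Vaccine effectiveness as VE = 1 - RR, with a Wald CI on log(RR).
# The counts below are hypothetical, not the survey's data.
import numpy as np
from scipy.stats import norm

deaths_vax, n_vax     = 12, 180   # deaths among vaccinated cases
deaths_unvax, n_unvax = 14, 90    # deaths among unvaccinated cases

rr = (deaths_vax / n_vax) / (deaths_unvax / n_unvax)
se_log_rr = np.sqrt(1/deaths_vax - 1/n_vax + 1/deaths_unvax - 1/n_unvax)
lo, hi = np.exp(np.log(rr) + np.array([-1, 1]) * norm.ppf(0.975) * se_log_rr)
print(f"VE = {1 - rr:.0%} (95% CI {1 - hi:.0%} to {1 - lo:.0%})")
```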
Pietilainen, O.; Salonsalmi, A.; Rahkonen, O.; Lahelma, E.; Lallukka, T.
Objectives: Longer lifespans mean more years spent in retirement, despite efforts to raise the retirement age. It is therefore important to study how the retirement years can be spent free of disease. This study examined socioeconomic and sociodemographic differences in healthy years spent in retirement. Methods: We followed a cohort of retired Finnish municipal employees (N=4,231, average follow-up 15.4 years) in national administrative registers for major chronic diseases: cancer, coronary heart disease, cerebrovascular disease, diabetes, asthma or chronic obstructive pulmonary disease, dementia, mental disorders, and alcohol-related disorders. Median healthy years in retirement and age at first occurrence of illness (ICD-10 and ATC-based) in each combination of sex, occupational class, and age at retirement were predicted using Royston-Parmar models. Prevalence rates for each diagnostic group were calculated. Results: The most healthy years in retirement were enjoyed by women who had worked in semi-professional jobs and retired at age 60-62 (median predicted healthy years 11.6, 95% CI 10.4-12.7). The fewest healthy years in retirement were seen among men who had worked in routine non-manual jobs and retired after age 62 (median predicted healthy years 6.5, 95% CI 4.4-9.5). Diabetes was slightly more common among women in lower occupational classes, and dementia among women in manual jobs who had retired at age 60-62. Discussion: Healthy years in retirement are not enjoyed equally by women and men, nor by those who retire earlier or later. Policies aiming to increase the retirement age should consider the effects of these gaps on retirees and the equitability of those effects.
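The paper's median healthy years come from Royston-Parmar flexible parametric survival models, which have no mainstream Python implementation. As a simpler stand-in, a Kaplan-Meier estimate of the median disease-free time within one sex/class/retirement-age stratum; the durations below are simulated.

```python
# Kaplan-Meier stand-in for the "median healthy years in retirement"
# quantity (the paper itself uses Royston-Parmar flexible parametric
# models). Durations and censoring below are simulated.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
years_to_first_disease = rng.weibull(1.5, 300) * 12  # toy durations (years)
observed = rng.random(300) < 0.7                     # 1 = disease occurred

kmf = KaplanMeierFitter()
kmf.fit(years_to_first_disease, event_observed=observed)
print(f"median healthy years in retirement: {kmf.median_survival_time_:.1f}")
```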
Ukah, C. E.; Tendongfor, N.; Hubbard, A.; Tanue, E. A.; Oke, R.; Bassah, N.; Yunika, L. K.; Ngu, C. N.; Christie, S. A.; Nsagha, D. S.; Chichom-Mefire, A.; Juillard, C.
Background: Commercial motorcycle riders are among the most vulnerable road users in low- and middle-income countries and contribute substantially to the burden of road traffic injuries. The use of personal protective equipment (PPE), including helmets and protective clothing, reduces injury severity; however, uptake remains suboptimal. This study evaluated the effectiveness of a theory-driven health education intervention in improving knowledge, attitudes, and use of PPE among commercial motorcycle riders in Cameroon. Methods: A quasi-experimental, non-randomized controlled before-and-after study was conducted in Limbe (intervention) and Tiko (control) Health Districts between August 4, 2024, and April 6, 2025. Participants were recruited from a cohort of commercial motorcycle riders and followed over an eight-month intervention period. The intervention, guided by the Health Belief Model and developed using the Intervention Mapping framework, combined face-to-face sensitization sessions with mobile phone-based educational messaging adapted to participants' literacy levels and communication preferences. Data were collected at baseline and endline using structured questionnaires and direct observation checklists. Intervention effects were estimated using difference-in-differences analysis with generalized estimating equations, adjusting for socio-demographic factors. Results: A total of 313 riders were enrolled at baseline (183 intervention, 130 control), with 249 retained at endline (149 intervention, 100 control). The intervention was associated with significant improvements in PPE knowledge (β = 2.91; 95% CI: 2.14-3.68; p < 0.001) and attitudes (β = 5.76; 95% CI: 4.32-7.21; p < 0.001) compared with the control group. No statistically significant effect was observed for PPE practice scores (β = 0.21; 95% CI: -0.09 to 0.52; p = 0.171). Among individual PPE items, helmet use increased significantly in the intervention group relative to the control group (AOR = 2.38; 95% CI: 1.19-9.45; p = 0.036), while no significant effects were observed for gloves, trousers, eyeglasses, or closed-toe shoes. Conclusion: The theory-driven health education intervention significantly improved knowledge of and attitudes toward PPE and increased helmet use among commercial motorcycle riders, but did not lead to broader improvements in the uptake of other protective equipment. These findings highlight the need for complementary structural and policy interventions to address persistent barriers to PPE use in similar low-resource settings. Trial registration: ClinicalTrials.gov Identifier NCT07087444 (registered July 28, 2025, retrospectively).
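The difference-in-differences-with-GEE analysis has a direct statsmodels expression: the group-by-period interaction coefficient is the DiD estimate. A minimal sketch on simulated two-wave data with hypothetical column names; the paper's covariate adjustment is omitted.

```python
# Difference-in-differences via GEE with an exchangeable working
# correlation to handle repeated measures per rider. The arm:period
# coefficient is the DiD estimate. Data and names are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 249
base = pd.DataFrame({"rider": range(n), "arm": rng.integers(0, 2, n)})
long = pd.concat([base.assign(period=0), base.assign(period=1)])
long["knowledge"] = (10 + 2 * long["arm"] + 1 * long["period"]
                     + 3 * long["arm"] * long["period"]    # true DiD = 3
                     + rng.normal(0, 2, len(long)))

gee = smf.gee("knowledge ~ arm * period", groups="rider", data=long,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()
print(gee.params["arm:period"])   # difference-in-differences estimate
```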
Hung, J.; Smith, A.
The global ambition to end the human immunodeficiency virus (HIV) epidemic requires understanding which system-level policy levers, enacted under the framework of Universal Health Coverage (UHC), are most effective in achieving both transmission reduction and diagnostic coverage. This study addresses an important evidence gap by quantifying the within-country association between measurable UHC policy indicators and the estimated rate of new HIV infections across nine Southeast Asian countries between 2013 and 2022. Employing a fixed-effects panel data methodology, the analysis controls for time-invariant national heterogeneity, supporting more reliable estimates of policy impact. We found that marginal changes in total current health expenditure (CHE) as a percentage of gross domestic product (GDP) were not statistically significantly associated with changes in HIV incidence. However, increases in the UHC Infectious Disease Service Coverage Index were statistically significantly associated with concurrent reductions in HIV incidence (p < 0.001), suggesting that targeted service implementation is the principal driver of curbing new HIV infections. In addition, the UHC Reproductive, Maternal, Newborn, and Child Health Service Coverage Index exhibited a statistically significant positive association with changes in HIV incidence (p < 0.01), which is interpreted as a surveillance artefact resulting from expanded detection and reporting of previously undiagnosed HIV cases. Furthermore, out-of-pocket (OOP) health expenditure as a percentage of CHE showed a counter-intuitive negative association with changes in HIV incidence (p < 0.01), suggesting this metric primarily reflects ongoing indirect cost burdens on the established patient cohort or, alternatively, presents a diagnostic access barrier that results in lower case finding. These findings suggest that policymakers should prioritise investment in targeted infectious disease service efficacy over aggregate fiscal commitment and utilise integrated sexual health platforms for strengthened HIV surveillance and case identification.
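Country fixed effects can be estimated with the within (entity-demeaning) transformation followed by OLS, which reproduces dummy-variable fixed-effects point estimates. A minimal sketch on a simulated 9-country panel with hypothetical variable names; the standard errors here ignore the demeaning degrees-of-freedom correction.

```python
# Fixed-effects panel regression via the within transformation: demean
# each variable within country, then run OLS without a constant.
# Panel values and variable names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
panel = pd.DataFrame({
    "country": np.repeat(np.arange(9), 10),   # 9 countries x 10 years
    "uhc_idx": rng.normal(60, 10, 90),
    "che_gdp": rng.normal(4, 1, 90),
})
country_fe = rng.normal(0, 5, 9)[panel["country"]]   # unobserved heterogeneity
panel["hiv_inc"] = 30 - 0.4 * panel["uhc_idx"] + country_fe + rng.normal(0, 2, 90)

demeaned = panel.groupby("country")[["hiv_inc", "uhc_idx", "che_gdp"]].transform(
    lambda s: s - s.mean())
fit = sm.OLS(demeaned["hiv_inc"], demeaned[["uhc_idx", "che_gdp"]]).fit()
print(fit.params)   # uhc_idx coefficient recovers ~ -0.4 despite the FE
```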
Nguyen, D.; ONeill, C.; Akaraci, S.; Tate, C.; Wang, R.; Garcia, L.; Kee, F.; Hunter, R. F.
Highlights:
- Health inequalities have widened over 15 years, favouring high-income groups
- Inequality in physical activity and mental health widened the most pre-intervention
- Post-intervention, inequalities persisted but stayed relatively unchanged
- Long-term illness and unemployment were key drivers of inequality
- The greenway may have slowed the widening of inequality, but the impact is limited

Background: Evidence concerning health inequalities following urban green and blue space (UGBS) interventions is limited. This study examined the changes in health inequalities after a major urban regeneration project, the Connswater Community Greenway (CCG), in Belfast, UK. Method: Cross-sectional household surveys were conducted in 2010/11 (baseline), 2017/18 (immediately after completion), and 2023/24 (long-term follow-up) with a sample of approximately 1,000 adults each wave. Using concentration indices (CI), income-related health inequalities for three outcomes (physical activity, mental wellbeing and quality of life) were measured. A regression-based decomposition of the concentration index examined the contribution of sociodemographic factors to the observed inequalities underpinning each outcome over time. Results: Across the three waves, inequalities widened over the 15-year period across all three health outcomes, with high-income groups reporting higher levels of physical activity (CI=0.33, SE=0.026), better mental wellbeing (CI=0.03, SE=0.003), and better quality of life (CI=0.09, SE=0.008). The widening of inequalities mainly occurred during the construction phase of the CCG (2010-2017) and remained stable post-intervention (2017-2023). Decomposition analysis revealed that the pro-poor concentration of long-term illness and unemployment was the key driver, together explaining approximately 51%-76% of the inequalities. Conclusion: The CCG was limited in reducing health inequalities, which were mainly driven by long-term illness and unemployment (factors beyond the direct scope of the UGBS intervention), leaving low-income groups likely to fall further behind wealthier groups. The widening of inequality is consistent with findings from other public interventions that did not have a primary equity focus.
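The concentration index at the heart of this analysis is computable in a few lines: CI = 2·cov(h, r)/mean(h), where h is the health outcome and r the fractional income rank. A minimal numpy sketch on simulated data.

```python
# Income-related concentration index: CI = 2 * cov(h, r) / mean(h), where
# r is the fractional income rank. Positive CI means the outcome is
# concentrated among the better-off (pro-rich). Values are simulated.
import numpy as np

rng = np.random.default_rng(8)
income = rng.lognormal(10, 0.5, 1000)
health = 50 + 8 * (income - income.mean()) / income.std() + rng.normal(0, 5, 1000)

rank = (np.argsort(np.argsort(income)) + 0.5) / len(income)  # fractional rank
ci = 2 * np.cov(health, rank, bias=True)[0, 1] / health.mean()
print(f"concentration index = {ci:.3f}")   # > 0: pro-rich inequality
```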
Malingumu, E. E.; Badaga, I.; Kisendi, D. D.; Pierre Kabore, R. W.; Yeremon, O. G.; Mohamed, M. A.; He, Q.
This study evaluates the feasibility of implementing artificial intelligence (AI)-driven disease surveillance systems at Julius Nyerere International Airport (JNIA) in Tanzania, a key hub for regional and international travel. Through a mixed-methods approach combining qualitative interviews and quantitative surveys, the research assesses the infrastructure, human resource capacity, and regulatory frameworks necessary for AI integration. Findings indicate that while Port Health Officers are strongly optimistic about AI's potential to enhance disease detection, the airport faces significant barriers, including outdated infrastructure, insufficient technical resources, and a lack of trained personnel. Ethical and privacy concerns, particularly surrounding data security, also emerged as key challenges, compounded by limited public awareness and the socio-cultural acceptability of AI systems. Furthermore, the study identifies gaps in national policies and inter-agency coordination that hinder the effective implementation of AI technologies. The research concludes that while current conditions render AI adoption infeasible, strategic investments in infrastructure, workforce training, and policy development could pave the way for future integration, enhancing public health surveillance at JNIA and potentially other airports in low- and middle-income countries. This study contributes critical insights into the barriers and opportunities for AI-driven disease surveillance in low-resource settings, focusing specifically on a high-priority class of transit points: international airports. It emphasizes the importance of region-specific solutions to enhance health security in East Africa and supports the broader global health agenda by advocating for international collaboration and the development of scalable disease surveillance systems. Future research should explore pilot AI implementations at other airports to evaluate real-world challenges and refine AI systems for broader applicability, including cost-effectiveness analyses and integration of public perspectives on AI.